Search for: All records

Creators/Authors contains: "Cuadra, Andrea"

Note: When you click a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Ali, Raian; Lugrin, Birgit; Charles, Fred (Eds.)
    The widespread adoption of intelligent voice assistants (IVAs), like Amazon’s Alexa or Google’s Assistant, presents new opportunities for designers of persuasive technologies to explore how to support people’s behavior-change goals and habits with voice technology. In this work, we explore how to use planning prompts, a behavior-science technique for forming specific and effective plans, with IVAs. We design and conduct usability testing (N = 13) on a voice app called Planning Habit that encourages users to formulate daily plans out loud. We identify strategies that make it possible to successfully adapt planning prompts to voice format. We then conduct a week-long online deployment (N = 40) of the voice app in the context of daily productivity. Overall, we find that traditional forms of planning prompts can be adapted to and enhanced by IVA technology.
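    The abstract does not include implementation details, but the core interaction is easy to illustrate. The sketch below is an assumption rather than the authors' Planning Habit implementation: it shows a minimal Alexa-style skill, using the Alexa Skills Kit SDK for Python, that delivers a daily planning prompt and captures the spoken plan. The intent name CapturePlanIntent, the plan slot, and the prompt wording are hypothetical.

```python
# Minimal sketch of a planning-prompt voice app, assuming the Alexa Skills Kit
# SDK for Python. The intent name, slot name, and wording are hypothetical and
# are not taken from the Planning Habit app described in the paper.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name

sb = SkillBuilder()

class LaunchHandler(AbstractRequestHandler):
    """On launch, deliver the planning prompt and keep the session open."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        prompt = ("What is the most important task you want to finish today, "
                  "and when and where will you do it?")
        return handler_input.response_builder.speak(prompt).ask(prompt).response

class CapturePlanHandler(AbstractRequestHandler):
    """Store the user's spoken plan and read it back as confirmation."""
    def can_handle(self, handler_input):
        return is_intent_name("CapturePlanIntent")(handler_input)  # hypothetical intent

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots or {}
        plan_slot = slots.get("plan")  # hypothetical free-form slot
        plan = plan_slot.value if plan_slot and plan_slot.value else "your plan"
        # Keep the plan in session attributes; a deployed skill would persist it.
        handler_input.attributes_manager.session_attributes["plan"] = plan
        return handler_input.response_builder.speak(
            f"Got it. Your plan for today is: {plan}. Good luck!").response

sb.add_request_handler(LaunchHandler())
sb.add_request_handler(CapturePlanHandler())
handler = sb.lambda_handler()  # entry point for AWS Lambda
```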
  2.
    People interacting with voice assistants are often frustrated by the assistants' frequent errors and inability to respond to backchannel cues. We introduce an open-source video dataset of 21 participants' interactions with a voice assistant, and explore the possibility of using this dataset to enable automatic error recognition to inform self-repair. The dataset includes clipped and labeled videos of participants' faces during free-form interactions with the voice assistant, recorded from the smart speaker's perspective. To validate our dataset, we emulated a machine learning classifier by asking crowdsourced workers to recognize voice assistant errors from watching soundless video clips of participants' reactions. We found trends suggesting it is possible to determine the voice assistant's performance from a participant's facial reaction alone. This work posits elicited datasets of interactive responses as a key step toward improving error recognition for self-repair in voice assistants across a wide variety of applications.
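    The labeled dataset is the paper's contribution; the error classifier itself was only emulated with crowdworkers. As a hedged illustration of the automatic route the paper points toward, the sketch below trains a simple classifier on per-clip facial-reaction features. The clips.csv layout, the feature columns, and the binary error label are assumptions for illustration, not the released dataset's actual format.

```python
# Hypothetical sketch: automatic error recognition from labeled reaction clips.
# Assumes per-clip facial features (e.g., action-unit intensities) have already
# been extracted into clips.csv with a binary "error" label; this file layout
# is an assumption, not the released dataset's format.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

clips = pd.read_csv("clips.csv")                  # one row per labeled clip
X = clips.drop(columns=["clip_id", "error"])      # precomputed facial features
y = clips["error"]                                # 1 = assistant erred, 0 = no error

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```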